
Faster R-CNN


PrObeD: Proactive Object Detection Wrapper

Neural Information Processing Systems

Existing object detectors are regarded as passive methods, as they take the input image as is. However, convergence to a global minimum is not guaranteed for neural networks; we therefore argue that the trained weights of an object detector are not optimal. To rectify this problem, we propose PrObeD, a proactive wrapper that enhances the performance of these object detectors by learning a signal. PrObeD consists of an encoder-decoder architecture: the encoder generates an image-dependent signal, termed a template, to encrypt the input images, and the decoder recovers this template from the encrypted images. We propose that learning the optimal template yields an object detector with improved detection performance. The template acts as a mask over the input images, highlighting semantics useful to the object detector. Finetuning the object detector on these encrypted images enhances detection performance for both generic and camouflaged object detection.
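The encode-encrypt-decode loop described in the abstract can be sketched with a toy NumPy stand-in. Here a hand-written sigmoid saliency proxy replaces PrObeD's learned encoder CNN, and the decoder inverts the multiplicative mask analytically; all function names and the specific encoder form are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(image):
    # Hypothetical encoder: an image-dependent template in (0, 1).
    # (PrObeD's real encoder is a learned network; this is a toy proxy.)
    return 1.0 / (1.0 + np.exp(-(image - image.mean())))

def encrypt(image, template):
    # The template acts as a multiplicative mask on the input image.
    return image * template

def decoder(encrypted, image):
    # Recover the template from the encrypted image (safe element-wise
    # division; zero where the original pixel is ~0).
    return np.divide(encrypted, image, out=np.zeros_like(image),
                     where=np.abs(image) > 1e-8)

image = rng.random((8, 8)) + 0.1      # toy strictly-positive "image"
template = encoder(image)
encrypted = encrypt(image, template)
recovered = decoder(encrypted, image)
print(np.allclose(recovered, template))  # True: template is recoverable
```

In the actual method the encoder, decoder, and detector are trained jointly so that the template highlights detection-relevant semantics; the sketch only shows the wrapper's data flow.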


RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder

Neural Information Processing Systems

Existing object detection frameworks are usually built on a single format of object/part representation, i.e., anchor/proposal rectangle boxes in RetinaNet and Faster R-CNN, center points in FCOS and RepPoints, and corner points in CornerNet. While these different representations usually drive the frameworks to perform well in different aspects, e.g., better classification or finer localization, it is in general difficult to combine these representations in a single framework to make good use of each strength, due to the heterogeneous or non-grid feature extraction by different representations.
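The "bridging" idea, one representation's features attending to another's, can be illustrated with a minimal cross-attention sketch in NumPy: anchor-box features act as queries over point-based features. The random projection matrices stand in for learned weights and the feature sizes are arbitrary; this is an illustration of the attention mechanism, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy features: 4 anchor boxes (the "master" representation) and
# 6 key points (the auxiliary representation), channel dim C = 8.
C, d = 8, 16
anchor_feats = rng.standard_normal((4, C))
point_feats  = rng.standard_normal((6, C))

# Random stand-ins for learned projections (assumptions, not trained weights).
Wq, Wk, Wv = (rng.standard_normal((C, d)) for _ in range(3))
Wo = rng.standard_normal((d, C))

q, k, v = anchor_feats @ Wq, point_feats @ Wk, point_feats @ Wv
attn = softmax(q @ k.T / np.sqrt(d))    # (4, 6): anchors attend to points
bridged = anchor_feats + attn @ v @ Wo  # residual enhancement, shape (4, 8)

print(bridged.shape)                     # (4, 8)
print(np.allclose(attn.sum(axis=1), 1.0))  # True: rows are distributions
```

Because the output keeps the anchor features' shape, the enhanced features drop into the existing detection head unchanged, which is how heterogeneous representations can be combined without redesigning the framework.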


A Physics-Constrained, Design-Driven Methodology for Defect Dataset Generation in Optical Lithography

Hu, Yuehua, Kong, Jiyeong, Shin, Dong-yeol, Kim, Jaekyun, Kang, Kyung-Tae

arXiv.org Artificial Intelligence

The efficacy of Artificial Intelligence (AI) in micro/nano manufacturing is fundamentally constrained by the scarcity of high-quality, physically grounded training data for defect inspection. Lithography defect data from the semiconductor industry are rarely accessible for research use, resulting in a shortage of publicly available datasets. To address this bottleneck in lithography, this study proposes a novel methodology for generating large-scale, physically valid defect datasets with pixel-level annotations. The framework begins with the ab initio synthesis of defect layouts using controllable, physics-constrained mathematical morphology operations (erosion and dilation) applied to the original design-level layout. These synthesized layouts, together with their defect-free counterparts, are fabricated into physical samples via high-fidelity digital micromirror device (DMD)-based lithography. Optical micrographs of the synthesized defect samples and their defect-free references are then compared to create consistent defect delineation annotations. Using this methodology, we constructed a comprehensive dataset of 3,530 optical micrographs containing 13,365 annotated defect instances spanning four classes: bridge, burr, pinch, and contamination. Each defect instance is annotated with a pixel-accurate segmentation mask, preserving full contour and geometry. The segmentation-based Mask R-CNN achieves AP@0.5 of 0.980, 0.965, and 0.971, compared with 0.740, 0.719, and 0.717 for Faster R-CNN on the bridge, burr, and pinch classes, a mean AP@0.5 improvement of approximately 34%. For the contamination class, Mask R-CNN achieves an AP@0.5 roughly 42% higher than Faster R-CNN. These consistent gains demonstrate that our proposed methodology for generating defect datasets with pixel-level annotations is feasible for robust AI-based Measurement/Inspection (MI) in semiconductor fabrication.
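The core layout-perturbation step, morphology operations on a design-level layout that synthesize defects together with exact pixel-level masks, can be sketched in NumPy. The toy two-line layout, the window placement, and the 4-connected structuring element are illustrative assumptions; the paper's operations are physics-constrained and applied to real design layouts:

```python
import numpy as np

def dilate(mask):
    # 4-connected binary dilation (pure NumPy, no SciPy dependency).
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    # Erosion as dilation of the complement (symmetric structuring element).
    return ~dilate(~mask)

# Toy design-level layout: two parallel 3-px-wide lines with a 2-px gap.
layout = np.zeros((11, 11), dtype=bool)
layout[2:5, 1:10] = True
layout[7:10, 1:10] = True

# "Bridge" defect: dilation grows the lines; restricting the growth to a
# small window injects a localized short across the gap.
grown = layout.copy()
for _ in range(2):
    grown = dilate(grown)
window = np.zeros_like(layout)
window[5:7, 5] = True
bridged = layout | (grown & window)

# "Pinch" defect: erosion thins the lines toward narrowing/breaking.
pinched = erode(layout)

# Pixel-level annotation = XOR between defective and defect-free layouts.
bridge_mask = bridged ^ layout
pinch_mask = pinched ^ layout
print(bridge_mask.sum(), pinch_mask.sum())
```

Because the defect-free layout is known exactly, the annotation mask comes for free as the XOR of the two layouts, which is what makes pixel-accurate segmentation labels cheap to produce at scale in this pipeline.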






REVIVE: Regional Visual Representation Matters in Knowledge-Based Visual Question Answering (Supplementary Materials) A Overview

Neural Information Processing Systems

In the supplementary materials, we provide the following sections: (a) implementation details of implicit knowledge retrieval in Section B; (b) ablation study experiments in Section C; (c) visualization results in Section D. We first describe further implementation details of the implicit knowledge retrieval in the proposed REVIVE, specifically how we extract multiple answer candidates with PICa's multi-query ensemble approach; in our experiments, we retrieve 5 candidates. When using only one implicit knowledge candidate, the model achieves 55.8% accuracy; however, when the number of retrieved candidates is 8, the performance is not the best. Table 3: Ablation study on using different object detectors, where R-CNN (R50) and R-CNN (R101) mean using ResNet-50 [2] and ResNet-101 [2] as backbones. Faster R-CNN with ResNet-50 and ResNet-101 as the backbone achieves 55.3% and 55.6% accuracy, respectively, and using GLIP as the object detector achieves the best result. Figure 1: Implicit knowledge retrieval visualization results without and with the proposed regional descriptions/tags.